

Search for: All records

Creators/Authors contains: "Kasal, Meghana V"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Online education technologies, such as intelligent tutoring systems, have garnered popularity for their automation. Whether through automated support systems for teachers (grading, feedback, summary statistics, etc.) or support systems for students (hints, common wrong-answer messages, scaffolding), these systems have built a well-rounded support system for students and teachers alike. The automation in these online educational technologies has often been limited to questions with well-structured answers, such as multiple choice or fill in the blank. Recently, these systems have begun adopting support for a more diverse set of question types, most notably open response questions. A common tool for developing automated open response tools, such as automated grading or automated feedback, is pre-trained word embeddings. Recent studies have shown that there is underlying bias within the text these embeddings were trained on. This research aims to identify what level of unfairness may lie within machine-learned algorithms that utilize pre-trained word embeddings. We attempt to identify whether our ability to predict scores for open response questions varies across different groups of student answers, for instance, answers that use fractions as opposed to decimals. By performing a simulated study, we are able to identify the potential unfairness within our machine-learned models that use pre-trained word embeddings.
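The fairness check described in the abstract can be sketched in a few lines: embed each open response with (pre-trained) word vectors, fit a scoring model, then compare prediction error between groups of answers that differ only in surface form (fractions vs. decimals). The sketch below is illustrative only; the tiny random embedding table stands in for real pre-trained vectors such as GloVe, and the toy answers and ridge model are hypothetical, not the paper's actual data or method.

```python
# Hypothetical sketch: measuring a score-prediction gap between answer styles
# when answers are embedded with (stand-in) pre-trained word vectors.
import numpy as np

# Stand-in for pre-trained embeddings (e.g., GloVe); real vectors would be
# loaded from disk rather than sampled randomly.
rng = np.random.default_rng(0)
vocab = ["the", "answer", "is", "so", "we", "get", "0.5", "1/2"]
emb = {w: rng.normal(size=8) for w in vocab}

def embed(answer):
    """Average the word vectors of an answer (bag-of-embeddings)."""
    vecs = [emb[w] for w in answer.split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(8)

# Toy dataset: identical correct answers in two notations.
decimal_answers = ["the answer is 0.5", "so we get 0.5"]
fraction_answers = ["the answer is 1/2", "so we get 1/2"]
scores = np.array([1.0, 1.0, 1.0, 1.0])  # all fully correct

X = np.vstack([embed(a) for a in decimal_answers + fraction_answers])

# Closed-form ridge regression as a stand-in grading model.
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ scores)
pred = X @ w

# Group-wise mean absolute error: a persistent gap would suggest the
# embedding treats equivalent notations differently, i.e., unfairness.
mae_decimal = np.abs(pred[:2] - scores[:2]).mean()
mae_fraction = np.abs(pred[2:] - scores[2:]).mean()
print(f"decimal MAE:  {mae_decimal:.3f}")
print(f"fraction MAE: {mae_fraction:.3f}")
```

In a simulated study like the one the abstract describes, this comparison would be repeated over many resampled answer sets so the gap between groups can be tested for significance rather than read off a single fit.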